Molecular dynamics simulations of bubble nucleation in dark matter detectors
Bubble chambers and droplet detectors used in dosimetry and dark matter
particle search experiments use a superheated metastable liquid in which
nuclear recoils trigger bubble nucleation. This process is described by the
classical heat spike model of F. Seitz [Phys. Fluids (1958-1988) 1, 2 (1958)],
which uses classical nucleation theory to estimate the amount and the
localization of the deposited energy required for bubble formation. Here we
report on direct molecular dynamics simulations of heat-spike-induced bubble
formation. They allow us to test the nanoscale process described in the
classical heat spike model. In total, 40 simulations were performed, each containing
about 20 million atoms, which interact by a truncated force-shifted
Lennard-Jones potential. We find that the energy per unit length needed for
bubble nucleation agrees quite well with theoretical predictions, but the
allowed spike length and the required total energy are about twice as large as
predicted. This could be explained by the rapid energy diffusion measured in
the simulations: contrary to the assumption in the classical model, heat
diffuses out of the spike on a time scale significantly shorter than that of
bubble formation.
Finally, we examine α-particle tracks, which are much longer than those of
neutrons and potential dark matter particles. Empirically, α events were
recently found to produce louder acoustic signals than neutron events; this
distinction is crucial for background rejection in dark matter searches. We
show that a large number of individual bubbles can form along an α track,
which explains the observed larger acoustic amplitudes.
Comment: 7 pages, 5 figures, accepted for publication in Phys. Rev. E, matches published version.
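The interatomic model named in the abstract, the truncated force-shifted Lennard-Jones potential, has a standard closed form: the plain 12-6 potential is shifted so that both the potential and the force go continuously to zero at the cutoff. A minimal sketch in reduced units; ε = σ = 1 and cutoff rc = 2.5 are conventional choices, not values quoted in the abstract:

```python
def lj(r, eps=1.0, sigma=1.0):
    """Standard 12-6 Lennard-Jones potential."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def lj_force(r, eps=1.0, sigma=1.0):
    """Radial LJ force, F(r) = -dV/dr."""
    sr6 = (sigma / r) ** 6
    return 24.0 * eps * (2.0 * sr6 * sr6 - sr6) / r

def lj_force_shifted(r, rc=2.5, eps=1.0, sigma=1.0):
    """Truncated force-shifted LJ:
    V_fs(r) = V(r) - V(rc) + (r - rc) * F(rc) for r < rc, else 0,
    so both V_fs and its derivative vanish at the cutoff rc."""
    if r >= rc:
        return 0.0
    return (lj(r, eps, sigma) - lj(rc, eps, sigma)
            + (r - rc) * lj_force(rc, eps, sigma))
```

The force shift avoids the impulsive force at the cutoff that a plain truncation would introduce, which matters for energy conservation in long heat-spike runs.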
A new strategy for matching observed and simulated lensing galaxies
The study of strong-lensing systems conventionally involves constructing a mass distribution that can reproduce the observed multiple-image properties. Such mass reconstructions are generically non-unique. Here, we present an alternative strategy: instead of modelling the mass distribution, we search cosmological galaxy-formation simulations for plausible matches. In this paper, we test the idea on seven well-studied lenses from the SLACS survey. For each of these, we first pre-select a few hundred galaxies from the EAGLE simulations, using the expected Einstein radius as an initial criterion. Then, for each of these pre-selected galaxies, we fit for the source light distribution, while using MCMC optimization for the placement and orientation of the lensing galaxy, so as to reproduce the multiple images and arcs. The results indicate that the strategy is feasible and can easily reject unphysical galaxy-formation scenarios. It even yields relative posterior probabilities of two different galaxy-formation scenarios, though these are not yet statistically significant. Extensions to other observables, such as kinematics and colours of the stellar population in the lensing galaxy, are straightforward in principle, though we have not attempted them yet. Scaling to arbitrarily large numbers of lenses also appears feasible. This will be especially relevant for upcoming wide-field surveys, through which the number of galaxy lenses may rise a hundredfold, overwhelming conventional modelling methods.
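The pre-selection by expected Einstein radius can be illustrated with the standard singular-isothermal-sphere formula θ_E = 4π (σ_v/c)² D_ls/D_s; the abstract does not state the exact criterion the authors use, so this is a sketch under that assumption. Angular-diameter distances are taken as inputs in consistent units:

```python
import math

C_KM_S = 299_792.458  # speed of light in km/s

def einstein_radius_sis(sigma_v_km_s, d_ls, d_s):
    """Einstein radius (radians) of a singular isothermal sphere:
    theta_E = 4*pi*(sigma_v/c)^2 * D_ls / D_s,
    with D_ls, D_s angular-diameter distances in the same units."""
    return 4.0 * math.pi * (sigma_v_km_s / C_KM_S) ** 2 * d_ls / d_s

def rad_to_arcsec(theta_rad):
    """Convert radians to arcseconds."""
    return theta_rad * 180.0 / math.pi * 3600.0
```

For a typical SLACS-like lens with σ_v ≈ 250 km/s and D_ls/D_s ≈ 0.5, this gives θ_E of order one arcsecond, which sets the scale for the initial match against simulated galaxies.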
Lessons from a blind study of simulated lenses: image reconstructions do not always reproduce true convergence
In the coming years, strong gravitational lens discoveries are expected to
increase in frequency by two orders of magnitude. Lens-modelling techniques are
being developed to prepare for the coming massive influx of new lens data, and
blind tests of lens reconstruction with simulated data are needed for
validation. In this paper we present a systematic blind study of a sample of 15
simulated strong gravitational lenses from the EAGLE suite of hydrodynamic
simulations. We model these lenses with a free-form technique and evaluate
reconstructed mass distributions using criteria based on shape, orientation,
and lensed image reconstruction. Especially useful is a lensing analogue of the
Roche potential in binary star systems, which we introduce in order to factor
out the well-known problem of steepness, or mass-sheet, degeneracy. Einstein
radii are on average well recovered for both quads and doubles; the position
angle of ellipticity is on average also reproduced well,
but the reconstructed mass maps tend to be too
round and too shallow. It is also easy to reproduce the lensed images, but
optimising on this criterion does not guarantee better reconstruction of the
mass distribution.
Comment: 20 pages, 12 figures. Published in MNRAS. Agrees with published version.
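The steepness or mass-sheet degeneracy mentioned above is the transformation κ → λκ + (1 − λ), which rescales a convergence map while leaving all lensed image positions unchanged. A minimal illustration; the nested-list container is for brevity only, not how the paper's free-form maps are stored:

```python
def mass_sheet_transform(kappa, lam):
    """Apply the mass-sheet transformation kappa' = lam*kappa + (1 - lam)
    to a 2-D convergence map. Image positions are preserved; only the
    profile steepness (and inferred source scale) changes."""
    return [[lam * k + (1.0 - lam) for k in row] for row in kappa]
```

Note that κ = 1 (the critical surface density) is a fixed point of the transformation, which is why degeneracy-breaking diagnostics focus on the profile away from the critical curves.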
The lens SW05 J143454.4+522850: a fossil group at redshift 0.6?
Fossil groups are considered the end product of natural galaxy-group evolution, in which group members sink towards the centre of the gravitational potential due to dynamical friction, merging into a single, massive, X-ray-bright elliptical. Since gravitational lensing depends on the mass of a foreground object, its mass concentration, and its distance to the observer, we can expect lensing effects of such fossil groups to be particularly strong. This paper explores the exceptional system SW05 J143454.4+522850 (with a lens redshift zL = 0.625). We combine gravitational lensing with stellar population synthesis to separate the total mass of the lens into stars and dark matter. The enclosed mass profiles are contrasted with state-of-the-art galaxy-formation simulations, leading us to conclude that SW05 is likely a fossil group with a high stellar-to-dark-matter mass fraction (0.027 ± 0.003) with respect to expectations from abundance matching (0.012 ± 0.004), indicative of a more efficient conversion of gas into stars in fossil groups.
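As a back-of-the-envelope check on the quoted fractions, and assuming independent Gaussian uncertainties (an assumption not stated in the abstract), the difference between 0.027 ± 0.003 and 0.012 ± 0.004 corresponds to roughly a 3σ tension:

```python
import math

def tension_sigma(a, sig_a, b, sig_b):
    """Significance, in Gaussian sigmas, of the difference between two
    independent measurements a +/- sig_a and b +/- sig_b."""
    return abs(a - b) / math.hypot(sig_a, sig_b)

# Lensing + stellar-population fraction vs abundance-matching expectation,
# values taken from the abstract above
t = tension_sigma(0.027, 0.003, 0.012, 0.004)  # ~3.0
```

The combined uncertainty is hypot(0.003, 0.004) = 0.005, so the 0.015 offset is a 3σ discrepancy under these assumptions.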
Large expert-curated database for benchmarking document similarity detection in biomedical literature search
Document recommendation systems for locating relevant literature have mostly relied on methods developed a decade ago. This is largely due to the lack of a large offline gold-standard benchmark of relevant documents that covers a variety of research fields, such that newly developed literature-search techniques can be compared, improved and translated into practice. To overcome this bottleneck, we have established the RElevant LIterature SearcH consortium, consisting of more than 1500 scientists from 84 countries, who have collectively annotated the relevance of over 180 000 PubMed-listed articles with regard to their respective seed (input) article(s). The majority of annotations were contributed by highly experienced, original authors of the seed articles. The collected data cover 76% of all unique PubMed Medical Subject Headings descriptors. No systematic biases were observed across different experience levels, research fields or time spent on annotations. More importantly, annotations of the same document pairs contributed by different scientists were highly concordant. We further show that the three representative baseline methods used to generate recommended articles for evaluation (Okapi Best Matching 25, Term Frequency-Inverse Document Frequency and PubMed Related Articles) had similar overall performance. Additionally, we found that these methods each tend to produce distinct collections of recommended articles, suggesting that a hybrid method may be required to capture all relevant articles. The established database server, located at https://relishdb.ict.griffith.edu.au, is freely available for downloading annotation data and for blind testing of new methods. We expect that this benchmark will be useful for stimulating the development of new, powerful techniques for title- and title/abstract-based search engines for relevant articles in biomedical research.
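Of the three baselines, Okapi BM25 has a compact closed form that is easy to sketch. The exact parameterization used by the consortium is not given in the abstract, so k1 = 1.5 and b = 0.75 below are common defaults, not their values:

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    """Okapi BM25: score each tokenized document (list of terms)
    against a query, using the smoothed "+0.5" idf form."""
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n
    df = Counter()                       # document frequency per term
    for d in docs:
        for t in set(d):
            df[t] += 1
    scores = []
    for d in docs:
        tf = Counter(d)                  # term frequency in this document
        s = 0.0
        for t in query_terms:
            if t not in tf:
                continue
            idf = math.log(1.0 + (n - df[t] + 0.5) / (df[t] + 0.5))
            norm = tf[t] + k1 * (1.0 - b + b * len(d) / avgdl)
            s += idf * tf[t] * (k1 + 1.0) / norm
        scores.append(s)
    return scores
```

For example, querying ["bubble", "nucleation"] against a small corpus ranks a document containing both terms above one containing only "bubble", with documents sharing no query terms scoring zero.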